The Kadir–Brady saliency detector extracts features of objects in images that are distinct and representative. It was introduced by Timor Kadir and Michael Brady in 2001,〔Timor Kadir and Michael Brady. Scale, Saliency and Image Description. International Journal of Computer Vision. 45 (2):83–105, 2001〕 an affine-invariant version was introduced by Kadir, Zisserman and Brady in 2004,〔Kadir, T., Zisserman, A. and Brady, M. An affine invariant salient region detector. Proceedings of the 8th European Conference on Computer Vision, Prague, Czech Republic, 2004〕 and a robust version was designed by Shao et al. in 2007.〔Ling Shao, Timor Kadir and Michael Brady. Geometric and Photometric Invariant Distinctive Regions Detection. Information Sciences. 177 (4):1088–1122, 2007〕

The detector removes background noise efficiently, making it easier to identify features that can be used in a 3D model. As the detector scans an image, it uses three criteria (global transformation, local perturbations and intra-class variations) to define the areas of search, and it identifies unique regions of the image rather than relying on the more traditional corner or blob searches. It aims to be invariant to affine transformations and illumination changes. This leads to a more object-oriented search than previous methods, and the detector outperforms other detectors because it does not blur the image, can ignore slowly changing regions, and uses a broader definition of surface geometry properties. As a result, the Kadir–Brady saliency detector is better suited to object recognition than detectors whose main focus is whole-image correspondence.

==Introduction==

Many computer vision and image processing applications work directly with features extracted from an image rather than with the raw image, for example when computing image correspondences or learning object categories. Depending on the application, different characteristics are preferred.
However, there are three broad classes of image change under which good performance may be required:

''Global transformation'': features should be repeatable across the expected class of global image transformations, which include both geometric and photometric transformations that arise from changes in the imaging conditions. For example, region detection should be covariant with viewpoint, as illustrated in Figure 1. In short, the segmentation is required to commute with viewpoint change. This property is evaluated on the repeatability and accuracy of localization and region estimation.

''Local perturbations'': features should be insensitive to classes of semi-local image disturbances. For example, a feature responding to the eye of a human face should be unaffected by any motion of the mouth. A second class of disturbance is where a region neighbours a foreground/background boundary: the detector can be required to detect the foreground region despite changes in the background.

''Intra-class variations'': features should capture corresponding object parts under intra-class variations in objects, for example the headlight of a car across different brands of car (imaged from the same viewpoint).

All feature detection algorithms attempt to detect regions which are stable under the three types of image change described above. Instead of finding a corner, blob, or any other specific shape of region, the Kadir–Brady saliency detector looks for regions which are locally complex and globally discriminative; such regions usually remain stable under these types of image change.
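Local complexity of the kind the detector looks for is commonly measured as the Shannon entropy of the intensity histogram in a window around a point, with salient scales being those at which the entropy peaks. The following is a minimal sketch of that idea only; the window radii, bin count, and the simple inter-scale weighting used here are illustrative assumptions, not the published Kadir–Brady formulation:

```python
import numpy as np

def local_entropy(image, cx, cy, radius, bins=16):
    """Shannon entropy (bits) of the intensity histogram inside a
    circular window of the given radius centred at (cx, cy)."""
    h, w = image.shape
    ys, xs = np.ogrid[:h, :w]
    mask = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    hist, _ = np.histogram(image[mask], bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins: 0*log(0) := 0
    return float(-np.sum(p * np.log2(p)))

def saliency_scale(image, cx, cy, scales=range(3, 12)):
    """Find a scale at which local entropy peaks, weighting the peak by
    how sharply the histogram changes across neighbouring scales (a rough
    stand-in for the detector's inter-scale self-dissimilarity weight)."""
    ent = [local_entropy(image, cx, cy, s) for s in scales]
    best, best_score = None, -1.0
    for i in range(1, len(ent) - 1):
        if ent[i] >= ent[i - 1] and ent[i] >= ent[i + 1]:  # interior peak
            weight = abs(ent[i] - ent[i - 1]) + abs(ent[i + 1] - ent[i])
            score = ent[i] * (1.0 + weight)
            if score > best_score:
                best, best_score = list(scales)[i], score
    return best, best_score
```

On a uniform patch the entropy is zero (all mass in one histogram bin), while a textured patch yields high entropy, which is why the detector ignores slowly changing regions while responding to locally complex ones.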